Abstract:
In this paper, the authors present a binary image compression scheme that can be used for either lossless or lossy compression requirements. The scheme contains five new contributions. The lossless component partitions the input image into a number of non-overlapping rectangles using a new line-by-line method. The upper-left and lower-right vertices of each rectangle are identified, and their coordinates are efficiently encoded using three methods of representation and compression. The lossy component, on the other hand, provides higher compression through two techniques. 1) It reduces the number of rectangles in the input image using the authors' mathematical regression models. These models guarantee image quality, so that rectangle reduction does not produce visible distortion in the image; they were obtained through subjective tests and regression analysis on a large set of binary images. 2) Further compression gain is achieved by discarding isolated pixels and 1-pixel rectangles from the image. Simulation results show that the proposed schemes provide significant improvements over previously published work for both the lossy and the lossless components.
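The line-by-line partitioning described above can be illustrated with a minimal sketch: scan the image row by row, keep a rectangle open while a horizontal run of black pixels repeats with identical extent, and close it when the run changes. This is an assumption about the general idea only; the paper's actual method and its three coordinate-encoding schemes are more elaborate.

```python
import numpy as np

def partition_rectangles(img):
    """Greedy line-by-line partition of black (1) pixels into
    non-overlapping rectangles. Returns a list of
    (top, left, bottom, right) inclusive coordinates.
    A simplified sketch, not the paper's exact algorithm."""
    rects = []            # finished rectangles
    active = {}           # (left, right) -> top row of the open rectangle
    h, w = img.shape
    for y in range(h + 1):            # the extra pass flushes open rects
        runs = set()
        if y < h:
            x = 0
            while x < w:
                if img[y, x]:
                    x0 = x
                    while x < w and img[y, x]:
                        x += 1
                    runs.add((x0, x - 1))
                else:
                    x += 1
        # close rectangles whose run did not continue unchanged
        for span in list(active):
            if span not in runs:
                rects.append((active.pop(span), span[0], y - 1, span[1]))
        # open rectangles for brand-new runs
        for span in runs:
            if span not in active:
                active[span] = y
    return rects

img = np.array([[1, 1, 0],
                [1, 1, 0],
                [0, 0, 1]], dtype=np.uint8)
print(sorted(partition_rectangles(img)))
# -> [(0, 0, 1, 1), (2, 2, 2, 2)]
```

Each rectangle is then fully described by two vertices, which is what makes the subsequent coordinate encoding compact.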
Abstract:
Similar images are images with common features, similar pixel distributions, and similar edge distributions. Fields such as medical imaging or satellite imaging often need to store large collections of similar images. In a set of similar images, the similarities represent patterns that consistently appear across all images; this results in "set redundancy". This paper presents the Centroid method, which extracts and uses these similarity patterns to reduce set redundancy and achieve higher lossless compression on sets of similar images. Experimental results with a medical image database demonstrate that the Centroid method can deliver significantly improved image compression.
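The core idea can be sketched as follows: form a centroid image (here, the rounded pixel-wise mean, one plausible choice) and store each image as a residual against it; the residuals are small and peaked around zero, so a standard lossless coder compresses them better than the originals. Function names are illustrative, not from the paper.

```python
import numpy as np

def centroid_decompose(images):
    """Split a stack of similar images into a centroid image plus
    per-image residuals -- a sketch of set-redundancy removal.
    The original method pairs this with a lossless coder."""
    stack = np.stack(images).astype(np.int16)
    centroid = np.round(stack.mean(axis=0)).astype(np.int16)
    residuals = stack - centroid          # small values, peaked at 0
    return centroid, residuals

def centroid_restore(centroid, residuals):
    """Lossless round trip: add residuals back onto the centroid."""
    return (centroid + residuals).astype(np.uint8)

imgs = [np.array([[100, 101], [102, 103]], dtype=np.uint8),
        np.array([[101, 100], [103, 102]], dtype=np.uint8)]
c, r = centroid_decompose(imgs)
assert np.array_equal(centroid_restore(c, r), np.stack(imgs))
```

Because the residuals are computed against the same stored centroid, reconstruction is exact regardless of how the centroid is rounded.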
Abstract:
Existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which makes the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed that realizes compression and encryption simultaneously, with a key that is easy to distribute, store, or memorize. The input image is divided into four blocks for compression and encryption, and the pixels of adjacent blocks are then exchanged randomly using random matrices. The measurement matrices in compressive sensing are constructed from circulant matrices, with the original row vectors of the circulant matrices controlled by a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
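The key-size advantage comes from the fact that a circulant matrix is fully determined by its first row, which can itself be generated from a logistic map seeded by a scalar key. The sketch below illustrates this construction under stated assumptions (the mapping to [-1, 1], the burn-in length, and the row-truncation step are my choices; the paper's exact construction may differ).

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99, burn=1000):
    """Iterate the logistic map x <- r*x*(1-x); discard a transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def circulant_measurement_matrix(m, n, key=0.3456):
    """Build an m x n measurement matrix from a circulant matrix whose
    first row is driven by a logistic map. Only the scalar `key` needs
    to be shared, not the whole matrix."""
    row = 2 * logistic_sequence(key, n) - 1        # map values to [-1, 1]
    full = np.empty((n, n))
    for i in range(n):
        full[i] = np.roll(row, i)                  # cyclic shifts
    return full[:m] / np.sqrt(m)                   # keep the first m rows

phi = circulant_measurement_matrix(4, 8, key=0.3456)
x = np.zeros(8); x[2] = 1.0                        # a sparse signal
y = phi @ x                                        # compressed measurements
```

The same key deterministically regenerates the same matrix at the receiver, which is exactly what makes the key small and easy to distribute.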
Abstract:
In this paper, a new lossless binary text image coding technique based on overlapping partitioning is presented. In this technique, the black regions in the image are first partitioned into a number of overlapping and non-overlapping rectangles. This partitioning algorithm gives, in general, fewer rectangles than those obtained using non-overlapping partitioning. After partitioning, the two opposite vertices of each rectangle are compressed using a simple encoding technique. For binary text images (of different languages and fonts), the overlapping partitioning proposed here yields better compression ratios than non-overlapping partitioning. In addition, the proposed scheme is suitable for texts consisting of different languages, fonts, and sizes.
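One plausible "simple encoding technique" for the rectangle vertices is delta coding: after sorting, each rectangle's coordinates are stored as small differences from the previous rectangle's, which an entropy coder then compresses well. This is a hypothetical illustration only; the paper's actual vertex encoding is not reproduced here.

```python
def delta_encode_vertices(rects):
    """Delta-code (top, left, bottom, right) vertices of sorted
    rectangles as differences from the previous rectangle.
    Hypothetical sketch, not the paper's scheme."""
    out, prev = [], (0, 0, 0, 0)
    for r in sorted(rects):
        out.append(tuple(a - b for a, b in zip(r, prev)))
        prev = r
    return out

def delta_decode_vertices(codes):
    """Invert the delta coding by accumulating the differences."""
    rects, prev = [], (0, 0, 0, 0)
    for c in codes:
        prev = tuple(a + b for a, b in zip(prev, c))
        rects.append(prev)
    return rects

rects = [(0, 0, 3, 5), (0, 7, 3, 12), (4, 0, 9, 2)]
codes = delta_encode_vertices(rects)
assert delta_decode_vertices(codes) == sorted(rects)
```

Since neighbouring glyph rectangles in a text image have nearby coordinates, the deltas stay small, which is the whole point of this style of encoding.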
Abstract:
A new hybrid image compression-encryption algorithm based on compressive sensing is proposed, which can accomplish image encryption and compression simultaneously. A partial Hadamard matrix, controlled by a chaotic map, is adopted as the measurement matrix, and the measurements are scrambled. Compared with methods that adopt a Gaussian random matrix as the measurement matrix, and those that use the whole measurement matrix as the key, the proposed algorithm reduces the burden of transferring the key and is more practical. With sensitive keys and good image compression ability, the proposed algorithm can resist various attacks. Simulation results verify the validity and reliability of the proposed algorithm.
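A partial Hadamard measurement matrix can be sketched as follows: build a full Hadamard matrix by the Sylvester construction, then select a subset of its rows in an order derived from a chaotic sequence seeded by the key. The row-selection rule below (argsort of the chaotic values) is an assumption, chosen only to show why a single scalar key suffices.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def partial_hadamard(m, n, key=0.7):
    """Select m rows of an n x n Hadamard matrix, with the row order
    driven by a logistic map seeded by `key`. Only the key needs to
    be shared, not the whole measurement matrix."""
    x, r = key, 3.99
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    rows = np.argsort(seq)[:m]       # chaos-ranked row selection
    return hadamard(n)[rows] / np.sqrt(m)

phi = partial_hadamard(4, 8, key=0.7)
```

Because Hadamard rows are mutually orthogonal and contain only plus or minus ones, the resulting matrix is cheap to apply, which is part of the practicality argument in the abstract.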
Abstract:
This letter explores the use of adaptive prediction length in the clustered differential pulse code modulation (C-DPCM) lossless compression method for hyperspectral images. In the C-DPCM method, linear prediction is performed using coefficients optimized separately for each spectral cluster. The difference between the predicted and original values is entropy coded using an adaptive range coder for each cluster. The results show that the C-DPCM method with adaptive prediction length achieves a lower bit-per-pixel value than the original C-DPCM method on the Consultative Committee for Space Data Systems 2006 AVIRIS test images. Both calibrated and uncalibrated image compression results are improved by adaptive prediction length.
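The per-cluster linear prediction step can be sketched like this: for one cluster of pixels, fit least-squares coefficients that predict a band from the previous `length` bands, and keep the residual for entropy coding. The adaptive variant in the letter additionally searches over `length` per cluster; the function below (a hypothetical name) shows only the fixed-length core.

```python
import numpy as np

def cdpcm_predict(cube, band, length, members):
    """Predict one spectral band of a hyperspectral cube from the
    previous `length` bands, with coefficients fit by least squares
    over the pixels of one cluster (`members` is a boolean mask).
    Sketch of C-DPCM's per-cluster linear prediction."""
    prev = cube[band - length:band, members].T   # (pixels, length)
    target = cube[band, members]                 # (pixels,)
    coeffs, *_ = np.linalg.lstsq(prev, target, rcond=None)
    residual = target - prev @ coeffs            # entropy-coded in practice
    return coeffs, residual

rng = np.random.default_rng(0)
cube = rng.normal(size=(10, 16, 16))             # (bands, rows, cols)
cube[5] = 0.6 * cube[4] + 0.3 * cube[3]          # an exactly linear band
mask = np.ones((16, 16), dtype=bool)             # one cluster = all pixels
coeffs, res = cdpcm_predict(cube, band=5, length=2, members=mask)
```

With a perfectly linear band the residual collapses to numerical noise, which is the limiting case of why longer (well-chosen) prediction lengths lower the bit rate.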
Abstract:
In 1996, the JPEG committee began to investigate possibilities for a new still image compression standard to serve current and future applications. This initiative, which was named JPEG 2000, has resulted in a comprehensive standard (ISO/IEC 15444 | ITU-T Recommendation T.800) that is being issued in six parts. Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000. Parts 2-6 define extensions to both the compression technology and the file format and are currently in various stages of development. In this paper, a technical description of Part 1 of the JPEG 2000 standard is provided, and the rationale behind the selected technologies is explained. Although the JPEG 2000 standard only specifies the decoder and the codestream syntax, the discussion spans both encoder and decoder issues to provide a better understanding of the standard in various applications.
Abstract:
Recent deep learning models outperform standard lossy image compression codecs. However, applying these models on a patch-by-patch basis requires that each image patch be encoded and decoded independently. The influence from adjacent patches is therefore lost, leading to block artefacts at low bitrates. We propose the Binary Inpainting Network (BINet), an autoencoder framework which incorporates binary inpainting to reinstate interdependencies between adjacent patches, for improved patch-based compression of still images. When decoding a patch, BINet additionally uses the binarised encodings from surrounding patches to guide its reconstruction. In contrast to sequential inpainting methods where patches are decoded based on previous reconstructions, BINet operates directly on the binary codes of surrounding patches without access to the original or reconstructed image data. Encoding and decoding can therefore be performed in parallel. We demonstrate that BINet improves the compression quality of a competitive deep image codec across a range of compression levels.
Abstract:
Based on compressive sensing and a double random encryption strategy, a novel color image compression and encryption scheme is proposed in this paper, adopting a compression-confusion-diffusion architecture. Firstly, the red, green, and blue components of the plain color image are converted to three sparse coefficient matrices by the discrete wavelet transform (DWT), and a double random position permutation (DRPP) is introduced to confuse the coefficient matrices. Subsequently, a Logistic-Tent system is utilized to generate an asymptotic deterministic random measurement matrix based on the chaotic system and the plain image (ADMMCP), which is used to measure the coefficient matrices and obtain the measurement value matrices. Moreover, simultaneous double random pixel diffusion between inter-intra components (SDRDIC) is presented to modify the elements of the measurement value matrices and obtain the final cipher image. A 4-D hyperchaotic system is applied to produce the chaotic sequences for confusion and diffusion; the initial conditions of the chaotic systems are controlled by the SHA-512 hash value of the plain image and external keys, so that the proposed image cryptosystem can withstand known-plaintext and chosen-plaintext attacks. Experimental results and security analyses verify the effectiveness of the proposed cipher.
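The confusion step (DRPP) can be sketched as two chaos-driven permutations, one over rows and one over columns of a coefficient matrix. For simplicity, the sketch below drives both with a plain logistic map standing in for the paper's Logistic-Tent system; the function names and key values are illustrative assumptions.

```python
import numpy as np

def chaotic_sequence(x, n, r=3.99):
    """Plain logistic map as a stand-in for the Logistic-Tent system."""
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def double_random_permute(mat, key_r=0.31, key_c=0.67):
    """Confuse a matrix by permuting rows and columns with index
    orders derived from two chaotic sequences (a sketch of DRPP;
    the keys are the chaotic initial values)."""
    rows = np.argsort(chaotic_sequence(key_r, mat.shape[0]))
    cols = np.argsort(chaotic_sequence(key_c, mat.shape[1]))
    return mat[rows][:, cols], (rows, cols)

def double_random_restore(scrambled, perm):
    """Invert the permutation: place each element back at its slot."""
    rows, cols = perm
    out = np.empty_like(scrambled)
    out[np.ix_(rows, cols)] = scrambled
    return out

m = np.arange(16).reshape(4, 4)
s, perm = double_random_permute(m)
assert np.array_equal(double_random_restore(s, perm), m)
```

A legitimate receiver regenerates the same permutations from the keys, so confusion is exactly invertible, while the measurement and diffusion stages supply the compression and the key sensitivity.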